AI chatbot


Race for AI is making Hindenburg-style disaster 'a real risk', says leading expert

The Guardian

The race to get artificial intelligence to market has raised the risk of a Hindenburg-style disaster that shatters global confidence in the technology, a leading researcher has warned. Michael Wooldridge, a professor of AI at Oxford University, said the danger arose from the immense commercial pressure technology firms are under to release new AI tools, with companies desperate to win customers before the products' capabilities and potential flaws are fully understood. The surge in AI chatbots with guardrails that are easily bypassed showed how commercial incentives were being prioritised over more cautious development and safety testing, he said. "It's the classic technology scenario," he said. "You've got a technology that's very, very promising, but not as rigorously tested as you would like it to be, and the commercial pressure behind it is unbearable."


No free pass for internet platforms on child safety, Starmer says

BBC News

No online platform will get a free pass on children's safety on the internet in new plans, Prime Minister Sir Keir Starmer has said. The government is pledging to close loopholes in existing laws designed to protect children online and will consult on a social media ban for under-16s as part of plans for online safety. There are also plans to introduce powers to speedily change the law in response to developing online behaviours, and to update legislation to preserve children's social media and online data - as campaigned for by the group Jools' Law. Opponents accused the government of inaction, and have called for Parliament to be given a vote on the social media ban for children. The government had already said it would launch the public consultation in March, seeking opinions about restricting children's access to AI chatbots and limiting infinite scrolling features for children - also known as doomscrolling.


Starmer to extend online safety rules to AI chatbots after Grok scandal

The Guardian

The government said it would close a legal loophole in the Online Safety Act. Makers of AI chatbots that put children at risk will face massive fines or even see their services blocked in the UK under law changes to be announced by Keir Starmer on Monday. Emboldened by Elon Musk's X stopping its Grok AI tool from creating sexualised images of real people in the UK after public outrage last month, ministers are planning a "crackdown on vile illegal content created by AI". With more and more children using chatbots for everything from help with their homework to mental health support, the government said it would "move fast to shut a legal loophole and force all AI chatbot providers to abide by illegal content duties in the Online Safety Act or face the consequences of breaking the law".


'I spoke to ChatGPT 8 times a day' - Gen Z's loneliness 'crisis'

BBC News

Working from home after years spent alone over Covid lockdowns, 23-year-old Paisley said he began to feel trapped, and felt only AI could help him. "I lost the ability to socialise," he said, and like many in Gen Z, he turned to AI for company. "At one point, I was talking to ChatGPT six, seven, eight times a day about my problems. I just couldn't get away from it. It was a dangerous slope." He shared his experience of loneliness with 22-year-old documentary maker Sam Tullen, who told the BBC that what Paisley was going through was part of a wider Gen Z loneliness crisis. Gen Z, the term for those born between 1997 and 2012, is often referred to as the first "digital native" generation.


Meta Seeks to Bar Mentions of Mental Health - and Zuckerberg's Harvard Past - From Child Safety Trial

WIRED

The trial starts soon in New Mexico's case against Meta - and the company is pulling out all the stops to protect its reputation. As Meta heads to trial in the state of New Mexico for allegedly failing to protect minors from sexual exploitation, the company is making an aggressive push to have certain information excluded from the court proceedings. The company has petitioned the judge to exclude certain research studies and articles around social media and youth mental health; any mention of a recent high-profile case involving teen suicide and social media content; and any references to Meta's financial resources, the personal activities of employees, and Mark Zuckerberg's time as a student at Harvard University. Meta's requests to exclude information, known as motions in limine, are a standard part of pretrial proceedings, in which a party can ask a judge to determine in advance which evidence or arguments are permissible in court. This is to ensure the jury is presented with facts rather than irrelevant or prejudicial information, and that the defendant is granted a fair trial.


Can AI chatbots trigger psychosis in vulnerable people?

FOX News



New Scientist changed the UK's freedom of information laws in 2025

New Scientist

By requesting copies of the then-UK technology secretary's ChatGPT logs, New Scientist set a precedent for how freedom of information laws apply to chatbot interactions, helping to hold governments to account. Our successful request for Peter Kyle's ChatGPT logs stunned observers. When I fired off an email at the start of 2025, I hadn't intended to set a legal precedent for how the UK government handles its interactions with AI chatbots, but that is exactly what happened. It all began in January when I read an interview with the then-UK tech secretary Peter Kyle. Seeking to show that he used first-hand the technology his department was set up to regulate, Kyle said that he would often have conversations with ChatGPT. That got me wondering: could I obtain his chat history? Freedom of information (FOI) laws are often deployed to obtain emails and other documents produced by public bodies, but past precedent has suggested that some private data - such as search queries - aren't eligible for release in this way. I was interested to see which way chatbot conversations would be categorised.


Are these AI prompts damaging your thinking skills?

BBC News

What was the last thing you asked an AI chatbot to do for you? Maybe you asked it for an essay structure to help answer a tricky question, an insightful analysis of a chunky data set, or a check that your cover letter matches the job description. Some experts worry that outsourcing these kinds of tasks means your brain is working less - and could even be harming your critical thinking and problem-solving skills. Earlier this year, the Massachusetts Institute of Technology (MIT) published a study showing that people who used ChatGPT to write essays showed less activity in brain networks associated with cognitive processing while undertaking the exercise.


Nearly one-third of teens use AI chatbots daily

Engadget

Of the major companies, OpenAI's ChatGPT has the biggest reach among younger users. AI chatbots haven't come close to replacing teens' social media habits, but they are playing a significant role in their online lives. Nearly one-third of US teens report using AI chatbots daily or more, according to a new report from Pew Research. The report is the first from Pew to specifically examine how often teens are using AI overall, and was published alongside its latest research on teens' social media use. It is based on an online survey of 1,458 US teens polled between September 25 and October 9, 2025.


'I feel it's a friend': quarter of teenagers turn to AI chatbots for mental health support

The Guardian

About 40% of 13- to 17-year-olds in England and Wales affected by youth violence are turning to AI chatbots for mental health support. It was after one friend was shot and another stabbed, both fatally, that Shan asked ChatGPT for help. She had tried conventional mental health services but "chat", as she came to know her AI "friend", felt safer, less intimidating and, crucially, more available when it came to handling the trauma from the deaths of her young friends. As she started consulting the AI model, the Tottenham teenager joined the roughly 40% of 13- to 17-year-olds in England and Wales affected by youth violence who are turning to AI chatbots for mental health support, according to research among more than 11,000 young people.